
    Master of Science

    Get PDF
    Cardiovascular disease (CVD) is a leading cause of morbidity and mortality among adults in the United States and accounts for a significant share of healthcare spending. The guidelines issued by the National Cholesterol Education Program Adult Treatment Panel III (ATP-III) emphasized the importance of lowering low-density lipoprotein (LDL) cholesterol. Based on patient CV risk, the guidelines identified LDL as the primary target for cholesterol-lowering therapy and recommended the use of statins for both primary and secondary prevention according to the patient's risk profile. With strong evidence for the effectiveness of statins in secondary prevention, the focus shifted to primary prevention. Many randomized clinical trials (RCTs) evaluating statin therapy have shown that statins lower LDL-C by 19% to 47%. However, statin therapy in real-world studies has not achieved the same level of LDL-C reduction seen in RCTs. In published studies, LDL-C goal attainment is defined as whether a patient achieves the LDL-C level corresponding to their CVD risk, measured after a specific follow-up time or at the end of the study. However, no studies have considered LDL-C as a modifiable risk factor that changes over time. Examining the association between LDL-C goal attainment and CVD in a time-dependent manner may provide a more accurate estimate of the association between LDL-C levels, as modified by statin therapy, and CVD outcomes. The research question of this study is whether more consistent LDL-C goal attainment reduces the incidence of CV events in primary prevention patients. The objectives of this study are 1) to identify quarterly LDL-C goal attainment per ATP-III guidelines in primary prevention patients in a real-world setting, and 2) to evaluate the relationship between time-dependent LDL-C goal attainment and CVD outcomes.
Results from this study suggested a reduction in CVD risk with more consistent LDL-C goal attainment, highlighting the importance of pharmacotherapy with medications of the right intensity, as well as medication adherence. The findings presented here add to the knowledge of the association between LDL-C goal attainment and CV event risk reduction.
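The quarterly goal-attainment exposure described above can be sketched in a few lines. The ATP-III LDL-C goals used here (<100 mg/dL for high-risk, <130 for moderate-risk, <160 for low-risk patients) are the standard thresholds, but the function names and data layout are illustrative assumptions, not the study's actual code.

```python
# ATP-III LDL-C goals in mg/dL by risk category (standard thresholds;
# the dictionary layout is an illustrative assumption).
ATP3_GOALS = {"high": 100, "moderate": 130, "low": 160}

def quarterly_attainment(ldl_by_quarter, risk):
    """Flag, for each quarter, whether the patient's LDL-C met the
    ATP-III goal for their risk category; None marks a quarter with
    no LDL-C measurement."""
    goal = ATP3_GOALS[risk]
    return [None if v is None else v < goal for v in ldl_by_quarter]

def attainment_proportion(ldl_by_quarter, risk):
    """Proportion of measured quarters at goal -- a simple summary of
    the time-dependent exposure suggested by the abstract."""
    flags = [f for f in quarterly_attainment(ldl_by_quarter, risk)
             if f is not None]
    return sum(flags) / len(flags) if flags else None
```

For example, a moderate-risk patient with quarterly LDL-C values of 95, 120, (missing), and 140 mg/dL is at goal in two of three measured quarters.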

    Towards Intelligent Data Acquisition Systems with Embedded Deep Learning on MPSoC

    Get PDF
    Large-scale scientific experiments rely on dedicated high-performance data-acquisition systems to sample, read out, analyse, and store experimental data. However, with the rapid development of detector technology in various fields, both the number of channels and the data rates are increasing. For trigger and control tasks, data-acquisition systems need to satisfy real-time constraints, achieve low latency, and provide the possibility to integrate intelligent data processing. In recent years, machine learning approaches have been used successfully in many applications. This dissertation studies how machine learning techniques can be integrated directly into the data acquisition of large-scale experiments. A universal data-acquisition platform for multiple data channels has been developed, and different machine learning implementation methods and applications have been realized on this system. On the hardware side, recent FPGAs provide not only high-performance parallel logic but also a growing set of additional features, such as ultra-fast transceivers and embedded ARM processors. TSMC's 16nm FinFET Plus (16FF+) 3D transistor technology enables Xilinx's Zynq UltraScale+ FPGA devices to achieve a performance-per-watt ratio 2 to 5 times higher than the previous generation. The selected main device, the ZU11EG, provides 32 GTH transceivers, each operating at up to 16.3 Gb/s, and 16 GTY transceivers, each operating at up to 32.75 Gb/s. These transceivers are routed to a x16-lane Gen 3/4 PCIe interface, 12 lanes of full-duplex FireFly electrical/optical data links, and a VITA 57.4 FMC+ connector. The new Zynq UltraScale+ device provides at least three major advantages for advanced data-acquisition systems: first, the 16nm FinFET+ programmable logic (PL) provides high-speed readout capabilities through its high-speed transceivers; second, the built-in quad-core 64-bit ARM Cortex-A53 processor can host an embedded Linux system.
Thus, web servers, slow control, and monitoring applications can be realized in an embedded processor environment; third, the Zynq Multiprocessor System-on-Chip technology connects the programmable logic and the microprocessors. In this thesis, the benefits of such architectures for integrating machine learning algorithms into data-acquisition systems and control applications are demonstrated. On the algorithm side, there have been many achievements in the field of machine learning over the last decades. Existing machine learning algorithms fall into several categories depending on how the learning phase is organized: Supervised Learning, Unsupervised Learning, Semi-Supervised Learning, and Reinforcement Learning. The most commonly used in scientific applications are supervised learning and reinforcement learning. Supervised learning learns from labelled input/output pairs and generates a function that maps new, unseen inputs to the appropriate outputs; a common application is classification. Supervised and reinforcement learning differ widely in their underlying mathematical theory, training, inference, and implementation. One natural solution is Application-Specific Integrated Circuit (ASIC) Artificial Intelligence (AI) chips. A typical example is the Google Tensor Processing Unit (TPU), which covers training and inference for both supervised and reinforcement learning. A major limitation is that such chips offer high compute power but not high data-transfer bandwidth. In comparison, Xilinx UltraScale+ FPGAs also provide raw compute power and efficiency for all data types, down to a single bit. From a deployment point of view, the training part of supervised learning is typically performed on CPUs, GPUs, or TPUs using a fixed dataset. For reinforcement learning, the training phase is more complex: the algorithm needs to interact periodically with the controlled system, stepping through a Markov Decision Process (MDP).
There is no static training dataset; the data are obtained in real time. The time between steps depends on the dynamics of the controlled system. Inference is also bound to this sampling time, because the algorithm needs to interact with the environment and decide on the appropriate action in response, which imposes stricter timing demands. This thesis gives solutions for both training and inference in reinforcement learning. First, the requirements are analyzed and the algorithm is derived from scratch; training is then implemented on the processing system (PS) of the Zynq device, while inference is implemented on the FPGA side, in a solution similar to the supervised learning case. The results for Policy Gradient show a substantial improvement over a CPU/GPU-based machine learning framework. Deep Deterministic Policy Gradient likewise improves in both training latency and stability. This implementation method provides a low-latency approach for in-field reinforcement learning training.
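As a minimal illustration of the supervised-learning setting described in the abstract (labelled inputs and outputs yielding a predictive function), the sketch below trains a perceptron on an AND-gate truth table. This toy example is our own illustration of the learning paradigm, not code from the thesis; all names are hypothetical.

```python
def predict(w, b, x1, x2):
    """Threshold unit: fires when the weighted sum exceeds zero."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

def train_perceptron(data, epochs=25, lr=0.5):
    """Classic perceptron rule: for each labelled (input, output) pair,
    nudge the weights in proportion to the prediction error."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            err = label - predict(w, b, x1, x2)
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b
```

Trained on the four labelled rows of the AND truth table, the learned function reproduces the gate on every input, which is exactly the "labelled input/output pairs to predictive function" pattern the abstract describes.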
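The interact-sample-update cycle of reinforcement-learning training described above can be shown in miniature: the sketch below runs REINFORCE with a running-mean baseline on a two-armed Bernoulli bandit, in pure Python. It illustrates the policy-gradient loop generically, not the thesis's Zynq/FPGA implementation; all names and hyperparameters are illustrative assumptions.

```python
import math
import random

def softmax(prefs):
    """Numerically stable softmax over action preferences."""
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    s = sum(exps)
    return [e / s for e in exps]

def train_bandit(rewards=(0.2, 0.8), steps=2000, lr=0.1, seed=0):
    """REINFORCE on a 2-armed Bernoulli bandit: sample an action from
    the softmax policy, observe a reward, and move the preferences
    along the log-probability gradient scaled by the advantage."""
    rng = random.Random(seed)
    prefs = [0.0, 0.0]
    baseline = 0.0                            # running mean of reward
    for _ in range(steps):
        probs = softmax(prefs)
        a = 0 if rng.random() < probs[0] else 1
        r = 1.0 if rng.random() < rewards[a] else 0.0
        baseline += 0.01 * (r - baseline)
        adv = r - baseline
        for i in range(len(prefs)):
            # d/d_pref_i of log pi(a) = 1[i == a] - probs[i]
            grad = (1.0 if i == a else 0.0) - probs[i]
            prefs[i] += lr * adv * grad
    return softmax(prefs)
```

Note there is no fixed dataset here: each training sample is generated by interacting with the environment, which is why, as the abstract argues, the training loop is bound to the controlled system's sampling time.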

    Consistency of Automated Market Makers

    Get PDF
    Decentralised Finance has popularised Automated Market Makers (AMMs), but surprisingly little research has been done on their consistency. Can a single attacker extract risk-free revenue from an AMM, regardless of price or other users' behaviour? In this paper, we investigate the consistency of a large class of AMMs, including the most widely used ones, and show that consistency holds.
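The no-risk-free-revenue property can be illustrated on the best-known AMM design, the constant-product market maker: a round-trip swap through such a pool never returns more than was paid in. The toy model below (Uniswap-v2-style fee taken from the input) is our own illustration of the kind of invariant the paper studies, not the paper's formal model.

```python
def cpmm_swap(x, y, dx, fee=0.003):
    """Constant-product swap: pay dx of asset X into a pool with
    reserves (x, y); dy solves (x + dx_eff) * (y - dy) = x * y."""
    dx_eff = dx * (1 - fee)          # fee taken from the input amount
    dy = y * dx_eff / (x + dx_eff)
    return x + dx, y - dy, dy

def round_trip(x, y, dx, fee=0.003):
    """Swap dx of X for Y, then swap the received Y straight back."""
    x1, y1, dy = cpmm_swap(x, y, dx, fee)
    # reverse direction: the pool's reserves are now (y1, x1)
    _, _, dx_back = cpmm_swap(y1, x1, dy, fee)
    return dx_back
```

With a zero fee the round trip returns exactly the input amount; with any positive fee it returns strictly less, so a lone attacker cannot profit from a round trip against this pool.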

    The POA Application in the Teaching of Chinese Writing as a Foreign Language

    Get PDF
    The traditional teaching mode takes the “text as the core”. A new teaching approach, the production-oriented approach (POA), was put forward by a well-known professor; it directs attention to both “input” and “production”. Together, these two elements represent a genuine innovation on, and development of, the traditional teaching mode. There is also often a disconnect between learning and using during teaching. Based on the POA, this paper takes Lesson 25 of Developing Chinese: Intermediate Writing II as an example and explores effective ways of organising production, so as to provide a reference for subsequent writing-teaching design from three aspects: driving, facilitating, and evaluating.

    Large zeta sums

    Full text link
    In this article, we investigate the behaviour of values of zeta sums $\sum_{n\le x} n^{it}$ when $t$ is large. We show some asymptotic behaviour and Omega results for zeta sums, analogous to previous results on large character sums $\sum_{n\le x}\chi(n)$.
    Comment: 11 pages
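Numerically, a zeta sum is just a sum of unimodular complex exponentials, since $n^{it} = e^{it\log n}$. The short sketch below (our own illustration, not code from the article) computes the partial sum and checks it against the trivial bound $|S(x,t)| \le x$.

```python
import cmath
import math

def zeta_sum(x, t):
    """Partial zeta sum S(x, t) = sum_{n <= x} n^{it}, computed via
    n^{it} = exp(i * t * log n); every term has modulus 1."""
    return sum(cmath.exp(1j * t * math.log(n)) for n in range(1, int(x) + 1))
```

At $t = 0$ every term equals 1 and the sum is exactly $x$; for large $t$ the terms oscillate and substantial cancellation typically keeps $|S(x,t)|$ well below the trivial bound $x$.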